Intrusion detection model based on combination of dilated convolution and gated recurrent unit
ZHANG Quanlong, WANG Huaibin
Journal of Computer Applications    2021, 41 (5): 1372-1377.   DOI: 10.11772/j.issn.1001-9081.2020071082
Intrusion detection models based on machine learning play a vital role in protecting the network environment. To address the inability of existing network intrusion detection models to fully learn the data features of network intrusions, deep learning theory was applied to intrusion detection and a deep network model with automatic feature extraction was proposed. In this model, dilated convolution was used to enlarge the receptive field and extract high-level features, a Gated Recurrent Unit (GRU) was used to extract the long-term dependencies among the retained features, and a Deep Neural Network (DNN) was then used to fully learn the data features. Compared with classical machine learning classifiers, this model achieves a higher detection rate. Experiments on the well-known KDD CUP99, NSL-KDD and UNSW-NB15 datasets show that the model outperforms other classifiers, with accuracies of 99.78% on KDD CUP99, 99.53% on NSL-KDD and 93.12% on UNSW-NB15.
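A minimal PyTorch sketch of the pipeline this abstract describes, chaining dilated 1-D convolutions, a GRU and fully connected layers; the layer widths, dilation rates and the 41-feature input are illustrative assumptions, not the authors' reported configuration:

```python
import torch
import torch.nn as nn

class DilatedConvGRU(nn.Module):
    """Sketch: dilated Conv1d -> GRU -> DNN classifier (all sizes assumed)."""
    def __init__(self, n_features=41, n_classes=5):
        super().__init__()
        # Dilated convolutions enlarge the receptive field without pooling.
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=3, dilation=1, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, dilation=2, padding=2), nn.ReLU(),
        )
        # The GRU captures long-term dependencies among extracted features.
        self.gru = nn.GRU(input_size=64, hidden_size=64, batch_first=True)
        # Fully connected layers learn the final decision boundary.
        self.fc = nn.Sequential(nn.Linear(64, 128), nn.ReLU(),
                                nn.Linear(128, n_classes))

    def forward(self, x):                   # x: (batch, n_features)
        h = self.conv(x.unsqueeze(1))       # (batch, 64, n_features)
        h, _ = self.gru(h.transpose(1, 2))  # (batch, n_features, 64)
        return self.fc(h[:, -1])            # classify from the last step

model = DilatedConvGRU()
logits = model(torch.randn(8, 41))          # e.g. 41 KDD CUP99 features
```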
Human action recognition based on coupled multi-Hidden Markov model and depth image data
ZHANG Quangui, CAI Feng, LI Zhiqiang
Journal of Computer Applications    2018, 38 (2): 454-457.   DOI: 10.11772/j.issn.1001-9081.2017081945
To address the susceptibility of feature extraction to external factors and its high computational complexity, depth data were used for human action recognition, which is a more effective solution. Using the joint data collected by Kinect, the human skeleton was divided into five regions. The vector angles of each region were discretized to describe different states, the Baum-Welch algorithm was then used to train a multi-Hidden Markov Model (multi-HMM), and the forward algorithm was used to build the region and action-class probability matrices. On this basis, intra-coupling and inter-coupling analyses were performed between regions and action categories to express the interaction between joints. Finally, a coupling-based K-Nearest Neighbors (KNN) algorithm was used to complete the action recognition. The experimental results show that the recognition rates of the five actions all exceed 90%, and the overall recognition rate is higher than that of contrast methods such as 3D Trajectories, giving the proposed algorithm a clear advantage.
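A small sketch of the first step, discretizing per-region joint vector angles into the observation symbols a multi-HMM would be trained on; the bin count and the exact angle definition are assumptions:

```python
import numpy as np

def joint_angle_states(parent_xyz, joint_xyz, child_xyz, n_bins=8):
    """Sketch: discretize the angle at each joint into HMM observation
    symbols. parent/joint/child are (T, 3) Kinect joint trajectories;
    n_bins is an assumed discretization level, not the paper's choice."""
    v1 = parent_xyz - joint_xyz                 # bone vector toward parent
    v2 = child_xyz - joint_xyz                  # bone vector toward child
    cos = np.sum(v1 * v2, axis=1) / (
        np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1) + 1e-12)
    angle = np.arccos(np.clip(cos, -1.0, 1.0))  # angle in [0, pi]
    return np.minimum((angle / np.pi * n_bins).astype(int), n_bins - 1)
```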
Stationary wavelet domain deep residual convolutional neural network for low-dose computed tomography image estimation
GAO Jingzhi, LIU Yi, BAI Xu, ZHANG Quan, GUI Zhiguo
Journal of Computer Applications    2018, 38 (12): 3584-3590.   DOI: 10.11772/j.issn.1001-9081.2018040833
Concerning the large amount of noise in Low-Dose Computed Tomography (LDCT) reconstructed images, a deep residual Convolutional Neural Network model in the Stationary Wavelet Transform domain (SWT-CNN) was proposed to estimate a Normal-Dose Computed Tomography (NDCT) image from an LDCT image. In the training phase, the high-frequency coefficients of a three-level Stationary Wavelet Transform (SWT) decomposition of LDCT images were taken as inputs, the residual coefficients obtained by subtracting the NDCT high-frequency coefficients from the LDCT high-frequency coefficients were taken as labels, and the mapping between inputs and labels was learned by a deep CNN. In the testing phase, the high-frequency coefficients of the NDCT image were predicted from those of the LDCT image using this mapping, and the predicted NDCT image was then reconstructed by the inverse Stationary Wavelet Transform (ISWT). The dataset consisted of 50 pairs of 512×512 normal-dose chest and abdominal scan slices of the same phantom, together with images reconstructed after adding noise in the projection domain; 45 pairs formed the training set and the remaining 5 pairs the test set. The SWT-CNN model was compared with state-of-the-art methods such as Non-Local Means (NLM), the K-Singular Value Decomposition (K-SVD) algorithm, Block-Matching and 3D filtering (BM3D), and image-domain CNN (Image-CNN). The experimental results show that the NDCT images predicted by the SWT-CNN model achieve higher Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM) and lower Root Mean Square Error (RMSE) than those of the other algorithms. The proposed model is therefore feasible and effective for improving the quality of low-dose CT images.
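A sketch of how such SWT-domain training pairs might be assembled with PyWavelets; the wavelet choice is an assumption and the CNN itself is omitted:

```python
import numpy as np
import pywt

def swt_highfreq_residual(ldct, ndct, wavelet="db1", level=3):
    """Sketch: build one SWT-domain training pair (input, residual label).
    The wavelet is an assumption; the paper specifies only a three-level
    SWT decomposition. Image sides must be divisible by 2**level."""
    ld = pywt.swt2(ldct, wavelet, level=level)   # [(cA, (cH, cV, cD)), ...]
    nd = pywt.swt2(ndct, wavelet, level=level)
    # Stack the high-frequency subbands of every level as channels.
    x = np.stack([b for _, bands in ld for b in bands])    # network input
    y_hi = np.stack([b for _, bands in nd for b in bands])
    residual = x - y_hi          # label: LDCT minus NDCT coefficients
    return x, residual

# At test time the CNN predicts `residual`; the NDCT high-frequency
# coefficients are recovered as x - residual, and pywt.iswt2 reconstructs.
```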
Combination of improved diffusion and bilateral filtering for low-dose CT reconstruction
ZHANG Pengcheng, ZHANG Quan, ZHANG Fang, CHEN Yan, HAN Jianning, HAO Huiyan, GUI Zhiguo
Journal of Computer Applications    2016, 36 (4): 1100-1105.   DOI: 10.11772/j.issn.1001-9081.2016.04.1100
A Median Prior (MP) reconstruction algorithm combining non-local means fuzzy diffusion with an extended-neighborhood bilateral filter was proposed to reduce the streak artifacts in low-dose Computed Tomography (CT) reconstruction. In the new algorithm, the non-local means fuzzy diffusion method was first used to improve the median prior in the Maximum A Posteriori (MAP) reconstruction algorithm, which reduced the noise in the reconstructed image; then a bilateral filter based on the extended neighborhood was applied to preserve the edges and details of the reconstructed image and improve the Signal-to-Noise Ratio (SNR). The Shepp-Logan model and a thorax phantom were used to test the effectiveness of the proposed algorithm. The experimental results show that, compared with the Filtered Back Projection (FBP), Median Root Prior (MRP), NonLocal Mean MP (NLMMP) and NonLocal Mean Bilateral Filter MP (NLMBFMP) algorithms, the proposed method achieves the smallest Normalized Mean Square Distance (NMSD) and Mean Absolute Error (MAE) and the highest SNR (10.20 dB and 15.51 dB on the two test images, respectively). The proposed reconstruction algorithm can thus reduce noise while preserving image edges and details, alleviating the degradation of low-dose CT images and yielding images of higher SNR and quality.
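For illustration, a plain bilateral filter showing the space-and-range weighting that underlies the edge-preserving step; the paper's extended-neighborhood variant differs, and all parameters here are assumed:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=1.5, sigma_r=0.1):
    """Sketch of a bilateral filter: weights combine spatial closeness and
    intensity similarity, so smoothing stops at edges."""
    H, W = img.shape
    pad = np.pad(img, radius, mode="reflect")
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # spatial kernel
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rangew = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            w = spatial * rangew
            out[i, j] = np.sum(w * patch) / np.sum(w)
    return out
```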
Statistical iterative algorithm based on adaptive weighted total variation for low-dose CT
HE Lin, ZHANG Quan, SHANGGUAN Hong, ZHANG Wen, ZHANG Pengcheng, LIU Yi, GUI Zhiguo
Journal of Computer Applications    2016, 36 (10): 2916-2921.   DOI: 10.11772/j.issn.1001-9081.2016.10.2916
Concerning the streak artifacts and impulse noise in Low-Dose Computed Tomography (LDCT) reconstructed images, a statistical iterative reconstruction method based on adaptive weighted Total Variation (TV) was presented for LDCT. Considering that traditional TV may introduce staircase effects while suppressing streak artifacts, an adaptive weighted TV model was proposed that combines a weighting factor based on weighted variation with the TV model. The new model was then applied to Penalized Weighted Least Squares (PWLS). Different areas of the image were processed with different denoising intensities, achieving a good balance between noise suppression and edge preservation. The Shepp-Logan model and a digital pelvis phantom were used to test the effectiveness of the proposed algorithm. Experimental results show that, compared with the Filtered Back Projection (FBP), PWLS, PWLS-Median Prior (PWLS-MP) and PWLS-TV algorithms, the proposed method achieves smaller Normalized Mean Square Distance (NMSD) and Normalized Average Absolute Distance (NAAD) on the two test images, while reaching Peak Signal-to-Noise Ratios (PSNR) of 40.91 dB and 42.25 dB, respectively. The proposed algorithm can thus preserve image details and edges well while effectively eliminating streak artifacts.
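A sketch of the core idea, a spatially weighted TV gradient in which an edge-dependent weight lowers the denoising intensity near strong gradients; the paper's specific weighting factor is not reproduced here, and this form is an assumption:

```python
import numpy as np

def weighted_tv_grad(u, eps=1e-8, k=0.05):
    """Sketch: gradient of a spatially weighted TV penalty. The weight w
    shrinks near strong gradients so edges are smoothed less."""
    gx = np.diff(u, axis=1, append=u[:, -1:])       # forward differences
    gy = np.diff(u, axis=0, append=u[-1:, :])
    mag = np.sqrt(gx**2 + gy**2 + eps)
    w = 1.0 / (1.0 + (mag / k)**2)                  # small weight at edges
    px, py = w * gx / mag, w * gy / mag
    # Negative divergence of the weighted, normalized gradient field.
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div    # one denoising step: u -= step_size * weighted_tv_grad(u)
```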
Adaptive total generalized variation denoising algorithm for low-dose CT images
HE Lin, ZHANG Quan, SHANGGUAN Hong, ZHANG Fang, ZHANG Pengcheng, LIU Yi, SUN Weiya, GUI Zhiguo
Journal of Computer Applications    2016, 36 (1): 243-247.   DOI: 10.11772/j.issn.1001-9081.2016.01.0243
A new denoising algorithm, Adaptive Total Generalized Variation (ATGV), was proposed for removing streak artifacts from reconstructed low-dose Computed Tomography (CT) images. Considering that traditional Total Generalized Variation (TGV) blurs edge details, intuitionistic fuzzy entropy, which can distinguish smooth regions from detail regions, was introduced into the TGV algorithm. Different areas of the image were processed with different denoising intensities, so that image details were well preserved. Firstly, the Filtered Back Projection (FBP) algorithm was used to obtain a reconstructed image. Secondly, an edge indicator function based on intuitionistic fuzzy entropy was used to improve the TGV algorithm. Finally, the new algorithm was employed to reduce the noise in the reconstructed image. Simulations of low-dose CT image reconstruction for the Shepp-Logan model and a thorax phantom were used to test the effectiveness of the proposed algorithm. The experimental results show that, compared with the Total Variation (TV) and TGV algorithms, the proposed algorithm yields smaller Normalized Mean Square Distance (NMSD) and Normalized Average Absolute Distance (NAAD) on the two test images, while reaching Peak Signal-to-Noise Ratios (PSNR) of 26.90 dB and 44.58 dB, respectively. The proposed algorithm can therefore effectively preserve image details and edges while reducing streak artifacts.
Symmetry optimization of polar coordinate back-projection reconstruction algorithm for fan beam CT
ZHANG Jing, ZHANG Quan, LIU Yi, GUI Zhiguo
Journal of Computer Applications    2014, 34 (6): 1711-1714.   DOI: 10.11772/j.issn.1001-9081.2014.06.1711
To improve the speed of image reconstruction based on fan-beam Filtered Back Projection (FBP), an optimized fast reconstruction method was proposed for the polar-coordinate back-projection algorithm. According to the symmetry of the trigonometric functions, the preprocessed projection data were back-projected onto the polar grid for several angles at the same time. During the coordinate transformation of the back-projected data, the computation of bilinear interpolation was reduced by exploiting the symmetry of the pixel position parameters. The experimental results show that, compared with the traditional convolution back-projection algorithm, the proposed method improves reconstruction speed by more than a factor of eight without sacrificing image quality. The new method is also applicable to 3D cone-beam reconstruction and can be extended to multi-slice spiral three-dimensional reconstruction.
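A sketch of the trigonometric symmetry being exploited: one cosine/sine evaluation serves four projection angles, so back-projection positions computed for one quadrant can be reused in the other three (a simplified reading of the optimization, not the paper's full scheme):

```python
import numpy as np

def quadrant_angles(cos_t, sin_t):
    """Sketch: for beta, beta+pi/2, beta+pi and beta+3pi/2 the (cos, sin)
    pairs are sign/swap permutations of one another, so polar-grid
    back-projection positions computed once can be mirrored three times."""
    return [(cos_t,  sin_t),     # beta
            (-sin_t, cos_t),     # beta + pi/2
            (-cos_t, -sin_t),    # beta + pi
            (sin_t,  -cos_t)]    # beta + 3*pi/2

beta = np.pi / 7
print(quadrant_angles(np.cos(beta), np.sin(beta)))
```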
High quality positron emission tomography reconstruction algorithm based on correlation coefficient and forward-and-backward diffusion
SHANG Guanhong, LIU Yi, ZHANG Quan, GUI Zhiguo
Journal of Computer Applications    2014, 34 (5): 1482-1485.   DOI: 10.11772/j.issn.1001-9081.2014.05.1482
In Positron Emission Tomography (PET) imaging, traditional iterative algorithms suffer from loss of detail and blurred object edges. A high-quality Median Prior (MP) reconstruction algorithm based on the correlation coefficient and Forward-And-Backward (FAB) diffusion was proposed to solve this problem. Firstly, a characteristic factor called the correlation coefficient was introduced to represent the local gray-level information of the image, and a new model was built by combining the correlation coefficient with the forward-and-backward diffusion model. Secondly, since the forward-and-backward diffusion model has the advantage of treating background and edges separately, the proposed model was applied to the Maximum A Posteriori (MAP) reconstruction algorithm with a median prior distribution, yielding a median prior reconstruction algorithm based on forward-and-backward diffusion. The simulation results show that the new algorithm can remove image noise while preserving object edges well, and the Signal-to-Noise Ratio (SNR) and Root Mean Squared Error (RMSE) confirm the improvement in reconstructed image quality.
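A sketch of a forward-and-backward diffusion coefficient in the style of Gilboa et al., positive (smoothing) at small gradients and negative (sharpening) in a backward range; the paper's correlation-coefficient modification is not included, and all parameters are assumptions:

```python
import numpy as np

def fab_coefficient(grad, kf=0.05, kb=0.2, w=0.05, alpha=0.5, n=4, m=1):
    """Sketch of a forward-and-backward (FAB) diffusion coefficient:
    the forward term smooths low-gradient background, the backward term
    turns diffusion negative (edge enhancing) around gradient level kb."""
    forward = 1.0 / (1.0 + (grad / kf)**n)
    backward = alpha / (1.0 + ((grad - kb) / w)**(2 * m))
    return forward - backward
```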
MLEM low-dose CT reconstruction algorithm based on variable exponent anisotropic diffusion and non-locality
ZHANG Fang, CUI Xueying, ZHANG Quan, DONG Chanchan, SUN Weiya, BAI Yunjiao, GUI Zhiguo
Journal of Computer Applications    2014, 34 (12): 3605-3608.  
Concerning the serious degradation of low-dose Computed Tomography (CT) reconstructed images, a Maximum Likelihood Expectation Maximization (MLEM) low-dose CT reconstruction method based on non-locality and a variable exponent was presented. To remedy the insufficient noise reduction of traditional anisotropic diffusion, a variable exponent that effectively compromises between the heat-conduction and anisotropic-diffusion (P-M) models, and a similarity function that detects edges and details in place of the gradient, were applied to traditional anisotropic diffusion to achieve the desired effect. In each iteration, the basic MLEM algorithm was first used to reconstruct the low-dose projection data; the diffusion function was then improved by the non-local similarity measure, the variable exponent and fuzzy mathematics theory, and the improved anisotropic diffusion was used to denoise the reconstructed image; finally, median filtering was used to eliminate impulse noise in the image. The experimental results show that the proposed algorithm achieves smaller Mean Absolute Error (MAE) and Normalized Mean Square Distance (NMSD) than OS-PLS (Ordered Subsets-Penalized Least Squares), OS-PML-OSL (Ordered Subsets-Penalized Maximum Likelihood-One Step Late) and the algorithm based on the traditional P-M model, and its Signal-to-Noise Ratio (SNR) reaches 10.52. The algorithm can effectively eliminate streak artifacts while better preserving image edges and details.
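A sketch of a variable-exponent P-M diffusion coefficient of the general kind described, with an exponent that moves between heat-like diffusion in flat areas and edge-stopping behaviour, optionally driven by a similarity map instead of the raw gradient; the exact forms are assumptions:

```python
import numpy as np

def variable_exponent_coefficient(grad, k=0.05, sim=None):
    """Sketch: P-M diffusion with a spatially varying exponent p in [1, 2].
    p near 2 gives heat-like smoothing in flat areas, p near 1 keeps
    edge-stopping behaviour; passing a patch-similarity map `sim` instead
    of the raw gradient mirrors the paper's non-local twist."""
    edge = sim if sim is not None else grad
    p = 1.0 + 1.0 / (1.0 + (edge / k)**2)   # ~2 in flat areas, ~1 at edges
    return 1.0 / (1.0 + (grad / k)**p)
```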
Patch similarity anisotropic diffusion algorithm based on variable exponent for image denoising
DONG Chanchan, ZHANG Quan, HAO Huiyan, ZHANG Fang, LIU Yi, SUN Weiya, GUI Zhiguo
Journal of Computer Applications    2014, 34 (10): 2963-2966.   DOI: 10.11772/j.issn.1001-9081.2014.10.2963
Concerning the contradiction between edge preservation and noise suppression in image denoising, a patch-similarity anisotropic diffusion algorithm based on a variable exponent was proposed. The algorithm combined the adaptive variable-exponent Perona-Malik (PM) denoising model with the idea of patch similarity, and constructed a new edge indicator and a new diffusion coefficient function. Traditional anisotropic diffusion denoising algorithms detect edges from the intensity similarity (or gradient) of individual pixels and therefore cannot effectively preserve weak edges and details such as texture; the proposed algorithm exploits the intensity similarity of neighboring patches, so it preserves more detail while removing noise. The simulation results show that, compared with traditional denoising algorithms based on Partial Differential Equations (PDE), the proposed algorithm improves the Signal-to-Noise Ratio (SNR) and Peak Signal-to-Noise Ratio (PSNR) to 16.602480 dB and 31.284672 dB respectively, and enhances noise robustness, while the filtered image preserves more detail features such as weak edges and textures and has good visual quality. The algorithm thus achieves a good balance between noise reduction and edge preservation.
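A sketch of the patch-similarity measurement that replaces single-pixel gradients as the edge indicator; the patch radius is an assumption:

```python
import numpy as np

def patch_distance(img, i, j, di, dj, r=2):
    """Sketch: Euclidean distance between the (2r+1)x(2r+1) patch at (i, j)
    and the one at (i+di, j+dj). Small distances mean 'same structure', so
    a diffusion coefficient built from this stays high and smoothing
    proceeds; large distances signal an edge and stop diffusion."""
    o = r + max(abs(di), abs(dj))
    pad = np.pad(img, o, mode="reflect")
    p = pad[o + i - r:o + i + r + 1, o + j - r:o + j + r + 1]
    q = pad[o + i + di - r:o + i + di + r + 1,
            o + j + dj - r:o + j + dj + r + 1]
    return np.sqrt(np.sum((p - q)**2))
```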
Fuzzy diffusion PET reconstruction algorithm based on anatomical non-local means prior
SHANG Guanhong, LIU Yi, ZHANG Quan, GUI Zhiguo
Journal of Computer Applications    2013, 33 (09): 2627-2630.   DOI: 10.11772/j.issn.1001-9081.2013.09.2627
A fuzzy diffusion Positron Emission Tomography (PET) reconstruction algorithm based on an anatomical non-local means prior was proposed to address two problems of the traditional Maximum A Posteriori (MAP) algorithm: details in low-gradient regions of the reconstructed image are not maintained effectively, and staircase artifacts appear. Firstly, the MAP reconstruction algorithm with a median prior distribution was improved by introducing an anisotropic diffusion filter combined with a fuzzy function before each median filtering step. Secondly, the fuzzy membership function was used as the diffusion coefficient in the anisotropic diffusion process, and image details were accounted for by the anatomical non-local prior information. The simulation results show that, compared with traditional algorithms, the new algorithm improves the Signal-to-Noise Ratio (SNR) and noise robustness, with good visual quality and clear edges, achieving a good balance between noise reduction and edge preservation.
Non-local means denoising approach based on dictionary learning
CUI Xueying, ZHANG Quan, GUI Zhiguo
Journal of Computer Applications    2013, 33 (05): 1420-1422.   DOI: 10.3724/SP.J.1087.2013.01420
Concerning the similarity measurement in non-local means, a method based on dictionary learning was presented. First, block-matching-based local pixel grouping was used to eliminate interference from dissimilar image blocks. Then, the corrupted similar blocks were denoised by dictionary learning: as a development of the classical sparse representation model, the similar patches were combined for joint sparse representation, and an efficient and compact dictionary was learned by principal component analysis, so that the relevance among similar patches was well preserved. The similarity between pixels was then measured by the Euclidean distance between denoised image blocks, which reflects the similarity of the similar blocks well. The experimental results show that the modified algorithm has superior denoising performance to the original one in terms of both Peak Signal-to-Noise Ratio (PSNR) and subjective visual quality. For images with high structural similarity and rich detail, structures and details are well preserved, and the robustness of the presented method is superior to that of the original.
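A sketch of the PCA-dictionary step on one group of similar patches, after which Euclidean distances between the denoised patches would serve as the non-local means similarity; the energy-based shrinkage rule is an assumed stand-in for the paper's coding scheme:

```python
import numpy as np

def pca_denoise_group(patches, keep=0.9):
    """Sketch: denoise a group of similar patches with a PCA dictionary.
    `patches` is (n, d), one flattened similar patch per row; the energy
    fraction `keep` is an assumption, not the paper's shrinkage rule."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    # PCA basis of the group acts as a compact, adaptive dictionary.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(energy, keep)) + 1   # keep leading components
    coeffs = centered @ vt[:k].T                 # joint coding of the group
    return coeffs @ vt[:k] + mean                # reconstructed patches
```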
Deterministic prediction of wavelet neural network model and its application
PAN Yumin, DENG Yonghong, ZHANG Quanzhu
Journal of Computer Applications    2013, 33 (04): 1001-1005.   DOI: 10.3724/SP.J.1087.2013.01001
Concerning the run-to-run randomness of neural network prediction results, a compact wavelet neural network was constructed. The method moved the wavelet function into the hidden layer of a Back-Propagation (BP) network and fixed the random state to obtain deterministic prediction results. Compared with a wavelet neural network implemented by programming and with the BP network, this method is suitable for training on large datasets, offers strong adaptability and robustness to data samples, adapts particularly well to high-frequency stochastic time series, and yields deterministic predictions with strong practicability. It clearly improves the training speed, prediction accuracy and prediction efficiency of the model. Its effectiveness was demonstrated by a gas emission prediction experiment using wavelet packet transformation and the wavelet neural network.
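A sketch of a wavelet-activation network forward pass, reading the fixed random state as seeding the random number generator so repeated runs give identical predictions; the Morlet wavelet and all sizes are assumptions, and training by back-propagation is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)   # fixed seed -> deterministic predictions

def morlet(x):
    """Morlet wavelet used as the hidden-layer activation (an assumed
    choice; the paper states only a wavelet function in the hidden layer)."""
    return np.cos(1.75 * x) * np.exp(-x**2 / 2)

def wnn_forward(x, n_hidden=8):
    """Sketch: single-hidden-layer wavelet network forward pass with
    per-neuron scale a and shift b, as in a typical WNN formulation.
    Weights here are random initial values that BP training would update."""
    w1 = rng.normal(size=(n_hidden, x.size))
    a = np.ones(n_hidden)             # wavelet scales
    b = np.zeros(n_hidden)            # wavelet shifts
    w2 = rng.normal(size=n_hidden)
    h = morlet((w1 @ x - b) / a)      # hidden layer: wavelet activations
    return w2 @ h                     # linear output neuron
```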
High quality ordered subset expectation maximization reconstruction algorithm based on multi-resolution for PET images
ZHANG Quan, FU Xuejing, LI Xiaohong, GUI Zhiguo
Journal of Computer Applications    2013, 33 (03): 648-650.   DOI: 10.3724/SP.J.1087.2013.00648
In Positron Emission Tomography (PET) imaging, the Maximum Likelihood Expectation Maximization (MLEM) algorithm cannot be directly applied to clinical diagnosis because it suppresses noise ineffectively and converges slowly. Although the Ordered Subset Expectation Maximization (OSEM) algorithm converges fast, it causes a significant decline in the quality of the reconstructed image. To address this problem, multi-resolution technology was introduced into the subset iterations of the OSEM reconstruction algorithm to suppress noise and stabilize the solving process. The experimental results indicate that the new algorithm overcomes the traditional algorithm's degradation of the reconstructed image while retaining fast convergence, obtaining a higher Signal-to-Noise Ratio (SNR) and superior visual quality.
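For reference, a sketch of the plain OSEM update that the paper builds on; the multi-resolution step inserted into each subset iteration is omitted, and a small dense system matrix is assumed for illustration:

```python
import numpy as np

def osem(y, A, n_subsets=4, n_iters=10):
    """Sketch of the standard OSEM update: each subset of projections
    multiplies the image estimate by a back-projected measured/predicted
    ratio, normalized by the subset sensitivity.
    y: measured sinogram (m,); A: system matrix (m, n), assumed dense."""
    x = np.ones(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iters):
        for s in subsets:
            As = A[s]
            ratio = y[s] / (As @ x + 1e-12)          # measured / predicted
            x *= (As.T @ ratio) / (As.sum(axis=0) + 1e-12)
    return x
```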
High quality median prior image reconstruction algorithm based on wavelet shrinkage and forward-and-backward diffusion
LI Xiao-hong, ZHANG Quan, LIU Yi, GUI Zhi-guo
Journal of Computer Applications    2012, 32 (12): 3357-3360.   DOI: 10.3724/SP.J.1087.2012.03357
A median prior image reconstruction algorithm based on a mixed model was put forward to solve the over-smoothing and staircase-edge problems of images reconstructed by Maximum A Posteriori (MAP) methods. First, in the MAP reconstruction method with a median prior distribution, a combination of wavelet shrinkage and a forward-and-backward anisotropic diffusion filter was introduced before each median filtering step. In addition, if the background area still retained a small amount of noise, a fine filter with nonlinear diffusion that smooths regions whose gradients fall below a small threshold could be added in the final iterations to refine the image. The simulation results show that the algorithm performs well in both noise reduction and edge preservation; compared with other classical algorithms, the Signal-to-Noise Ratio (SNR) is improved by 0.9 dB to 3.8 dB.
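A sketch of the wavelet-shrinkage ingredient, soft-thresholding the detail coefficients of a wavelet decomposition; the wavelet, level and threshold are assumed values, not the paper's settings:

```python
import numpy as np
import pywt

def wavelet_shrink(img, wavelet="db4", level=2, thresh=0.05):
    """Sketch of wavelet shrinkage: soft-threshold the detail coefficients
    of a multilevel decomposition, keep the approximation, reconstruct."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    shrunk = [coeffs[0]] + [
        tuple(pywt.threshold(b, thresh, mode="soft") for b in bands)
        for bands in coeffs[1:]
    ]
    return pywt.waverec2(shrunk, wavelet)
```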
Study and implementation of security protocols for wireless local network
LIN Qin, ZHANG Hao-jun, YANG Feng, ZHANG Quan-lin
Journal of Computer Applications    2005, 25 (01): 160-162.   DOI: 10.3724/SP.J.1087.2005.0160
The development of WLAN and its security protocols was reviewed. Research was then presented on some of the most important security standards formed during this development. Finally, an implementation based on the characteristics of these protocols was described.